Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.

We seek the solution to a set of linear equations, expressed in matrix terms as

:<math>A x = b.</math>

The Richardson iteration is

:<math>x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right),</math>

where <math>\omega</math> is a scalar parameter that has to be chosen such that the sequence <math>x^{(k)}</math> converges.

It is easy to see that the method has the correct fixed points, because if it converges, then <math>x^{(k+1)} \approx x^{(k)}</math>, and <math>x^{(k)}</math> has to approximate a solution of <math>A x = b</math>.

== Convergence ==

Subtracting the exact solution <math>x</math>, and introducing the notation <math>e^{(k)} = x^{(k)} - x</math> for the error, we get the equality for the errors

:<math>e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}.</math>

Thus,

:<math>\left\| e^{(k+1)} \right\| = \left\| (I - \omega A) e^{(k)} \right\| \leq \left\| I - \omega A \right\| \left\| e^{(k)} \right\|</math>

for any vector norm and the corresponding induced matrix norm. Thus, if <math>\| I - \omega A \| < 1</math>, the method converges.

Suppose that <math>A</math> is diagonalizable and that <math>(\lambda_j, v_j)</math> are the eigenvalues and eigenvectors of <math>A</math>. The error converges to <math>0</math> if <math>| 1 - \omega \lambda_j | < 1</math> for all eigenvalues <math>\lambda_j</math>. If, e.g., all eigenvalues are positive, this can be guaranteed if <math>\omega</math> is chosen such that <math>0 < \omega < 2 / \lambda_{\max}(A)</math>. The optimal choice, minimizing all <math>| 1 - \omega \lambda_j |</math>, is <math>\omega = 2 / (\lambda_{\min}(A) + \lambda_{\max}(A))</math>, which gives the simplest Chebyshev iteration. If there are both positive and negative eigenvalues, the method will diverge for any <math>\omega</math> if the initial error <math>e^{(0)}</math> has nonzero components in the corresponding eigenvectors.
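As a concrete illustration of the update rule <math>x^{(k+1)} = x^{(k)} + \omega ( b - A x^{(k)} )</math>, here is a minimal Python/NumPy sketch. The function name, tolerance, iteration cap, and test system are illustrative assumptions, not part of the method's original description.

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Modified Richardson iteration: x_{k+1} = x_k + omega * (b - A @ x_k).

    Converges when every eigenvalue lambda of A satisfies |1 - omega*lambda| < 1.
    Returns the approximate solution and the number of iterations performed.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        residual = b - A @ x          # b - A x^{(k)}
        if np.linalg.norm(residual) < tol:
            return x, k               # converged to tolerance
        x = x + omega * residual      # the Richardson update
    return x, max_iter                # tolerance not reached within the cap

# Illustrative usage on a small symmetric positive definite system
# (eigenvalues of A are about 2.38 and 4.62, so omega = 0.2 is admissible):
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = richardson(A, b, omega=0.2)
print(x, iters)  # x approximates the solution of A x = b
```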
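To see why <math>\omega = 2 / (\lambda_{\min}(A) + \lambda_{\max}(A))</math> is optimal when all eigenvalues are positive, one can compare the contraction factor <math>\max_j | 1 - \omega \lambda_j |</math> for different choices of <math>\omega</math>. The sketch below does this numerically; the example matrix and the suboptimal value 0.2 are assumptions chosen purely for demonstration.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # symmetric, so eigenvalues are real and positive
eigvals = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
lam_min, lam_max = eigvals[0], eigvals[-1]

omega_opt = 2.0 / (lam_min + lam_max)

def contraction_factor(omega):
    # max_j |1 - omega*lambda_j| governs the per-step error decay
    return np.max(np.abs(1.0 - omega * eigvals))

print(contraction_factor(0.2))        # admissible but suboptimal choice
print(contraction_factor(omega_opt))  # minimal: (lam_max - lam_min)/(lam_max + lam_min)
```

At the optimal <math>\omega</math>, the factors for <math>\lambda_{\min}</math> and <math>\lambda_{\max}</math> are balanced in magnitude, which is exactly what minimizes the worst case over all eigenvalues.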